gh-132551: make BytesIO object free-thread safe #132616
Conversation
The implementation looks fine, but I'm worried that concurrent use of `BytesIO` is an actual use case that we need to consider. For example, I could see someone using a logger that writes to an IO-like object from multiple threads. If they wanted to log to a `BytesIO` buffer (e.g., for temporarily suppressing the output), then that would scale pretty terribly with critical sections. I'm speculating, though.
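
Purely to illustrate the kind of workload described above (this is not code from the PR; the setup and names are invented for the example), a logger can be pointed at a shared `BytesIO` through a `TextIOWrapper`, so every log call from every thread ends up as a write into the same buffer:

```python
import logging
import threading
from io import BytesIO, TextIOWrapper

# Hypothetical illustration: capture log output from several threads in one BytesIO.
raw = BytesIO()
stream = TextIOWrapper(raw, encoding="utf-8", write_through=True)
logging.basicConfig(stream=stream, level=logging.INFO,
                    format="%(threadName)s %(message)s")

def worker(n):
    for i in range(n):
        logging.info("message %d", i)

threads = [threading.Thread(target=worker, args=(200,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

stream.flush()
print(len(raw.getvalue()), "bytes captured")
```

Note that `logging.Handler` already serializes `emit()` with its own lock, so this only sketches the shape of the workload; a handler without that lock, or direct `write()` calls, would hit the `BytesIO` concurrently.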
Well, that's why I asked about a specific better benchmark to test with, because:

```python
from io import BytesIO
from random import randint
import threading

def write(barrier, b):
    barrier.wait()
    b.write(b'0' * randint(100, 1000))

def check(funcs, *args):
    barrier = threading.Barrier(len(funcs))
    thrds = []
    for func in funcs:
        thrd = threading.Thread(target=func, args=(barrier, *args))
        thrds.append(thrd)
        thrd.start()
    for thrd in thrds:
        thrd.join()

if __name__ == "__main__":
    # Hammer a single shared BytesIO with 10 concurrent writers, repeatedly.
    count = 0
    while True:
        print(count := count + 1)
        check([write] * 10, BytesIO())
```
Well, yeah, we need to fix the races. I'm just wondering how we should fix it. Would you mind benchmarking (with this PR) the case I brought up against different numbers of threads? I want to see how bad the contention gets (and compare it to a GILful build as well).
Ok, will do a bunch of tests tomorrow.
EDIT: I just realized that with a good bit of rewriting, lock-free reads might be possible. EDIT 2: Nope, see below.

TL;DR: GIL-build performance is the same as expected, while the free-threaded build makes multithreaded writes possible and otherwise seems OK.

Details: on the free-threaded build, this PR has a minor hit to single-threaded read and a smaller hit to single-threaded write (command-line timeit). Multithreaded timings are unstable (especially write; run the script and you will see 2x variation), but read looks slower than main, while write is much faster than main with explicit locking (which is the only thing to compare it to, as otherwise writes crash).
Times in ns, lower is better.
Ping @ZeroIntensity, timings as you requested.

```python
import sys
from io import BytesIO
from threading import Barrier, Lock, Thread
from time import time_ns

BESTOF = 5           # best-of-N runs per benchmark
NLINES = 50_000      # lines per writer thread; also the total lines in the read buffer
LINE = b'.' * 99 + b'\n'
LLEN = len(LINE)
LOCK = Lock()        # explicit lock used only by write_locked

def read(barrier, out, b):
    barrier.wait()
    t0 = time_ns()
    while b.read(LLEN):
        pass
    out[0] = time_ns() - t0

def readline(barrier, out, b):
    barrier.wait()
    t0 = time_ns()
    while b.readline():
        pass
    out[0] = time_ns() - t0

def write_locked(barrier, out, b):
    barrier.wait()
    t0 = time_ns()
    for _ in range(NLINES):
        with LOCK:
            b.write(LINE)
    out[0] = time_ns() - t0

def write(barrier, out, b):
    barrier.wait()
    t0 = time_ns()
    for _ in range(NLINES):
        b.write(LINE)
    out[0] = time_ns() - t0

def check(func, nthrds, b, is_wr):
    # is_wr: the write workload grows with nthrds while the read workload is
    # fixed, so normalize the per-line time accordingly.
    barrier = Barrier(nthrds)
    outs = []
    thrds = []
    for _ in range(nthrds):
        out = [0]
        thrd = Thread(target=func, args=(barrier, out, b))
        outs.append(out)
        thrds.append(thrd)
        thrd.start()
    for thrd in thrds:
        thrd.join()
    return sum(o[0] for o in outs) / ((nthrds if is_wr else 1) * NLINES)

def checkbest(func, nthrds, setup, is_wr):
    best = float('inf')
    for _ in range(BESTOF):
        best = min(best, check(func, nthrds, setup(), is_wr))
    return best

if __name__ == "__main__":
    nthrds = int((sys.argv + ['1'])[1])  # thread count from argv, default 1
    print('read:        ', checkbest(read, nthrds, lambda: BytesIO(LINE * NLINES), False), 'ns')
    print('readline:    ', checkbest(readline, nthrds, lambda: BytesIO(LINE * NLINES), False), 'ns')
    print('write_locked:', checkbest(write_locked, nthrds, lambda: BytesIO(), True), 'ns')
    print('write:       ', checkbest(write, nthrds, lambda: BytesIO(), True), 'ns')
```
Many functions are just wrapped in a critical section. I wonder: what if we added a special flag, METH_FT_SYNC, and when it is set, wrapped the method-calling code in a critical section?
If you're talking about stuff like
I mean that this would also work for methods that do not use Argument Clinic. Was this idea discussed before?
Couldn't tell you that, I'm relatively new here (: But one drawback I can see: it would only work when calling via the Python calling mechanism. If you called the function directly from C, you would still have to enter the critical section yourself, as opposed to a Clinic-generated function, which would already have it, no?
Yeah, I've suggested it in the past. Apparently it's been tried before and failed; I don't know the details.
Ok, I had a look, and lock-free reads can almost certainly be done, but without synchronization they would not have the data-integrity guarantees you would want from a file-like object. They would behave the same way lock-free multithreaded iteration does: you could get duplicate data, and the position might not stay consistent with respect to writes, which is not what you expect from a "file". So scratch that idea.
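
To spell out the hazard being described (a pure-Python model, not the actual C implementation in `bytesio.c`): if the position is loaded, the slice taken, and the position stored back without any synchronization, two readers can be handed the same chunk. The `NaiveReader` class below is invented purely for this illustration:

```python
import threading

DATA = b"x" * 64 * 10_000   # payload to drain
CHUNK = 64

class NaiveReader:
    """Stand-in for a reader whose position update is not synchronized
    with the slice it hands out (this is NOT how BytesIO is implemented;
    it only models the hazard of a lock-free read path)."""
    def __init__(self, buf):
        self.buf = buf
        self.pos = 0

    def read(self, n):
        start = self.pos                   # load position
        chunk = self.buf[start:start + n]  # take the slice
        # Another thread can run the two lines above with the same `start`
        # before the store below happens.
        self.pos = start + len(chunk)      # store position
        return start, chunk

reader = NaiveReader(DATA)
offsets = []
lock = threading.Lock()

def drain():
    while True:
        start, chunk = reader.read(CHUNK)
        if not chunk:
            break
        with lock:
            offsets.append(start)

threads = [threading.Thread(target=drain) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

dupes = len(offsets) - len(set(offsets))
print(f"{dupes} chunks were handed out more than once")  # scheduling-dependent
```

Depending on scheduling, a given run may show zero duplicates; the point is only that nothing rules them out, which is exactly the guarantee a file-like object is expected to provide.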
Full `pyperformance` suite times (can anyone suggest a better benchmark for `BytesIO`?):

Changes: functions marked `*` had a critical section added, functions marked `-` did not. Atomics were added for the `exports` field:

Notes:
- `buf` and/or `pos` are not made atomic everywhere; only the `exports` field uses atomics.
- `test_free_threading.test_io.TestBytesIO` is clean under `--parallel-threads` TSAN.
- `bytesiobuf_getbuffer` doesn't need critical sections because, as far as I can see, it is only used for `BytesIO` in `bytesio.c`, where its use is protected by a critical section in `_io_BytesIO_getbuffer_impl` (it is also used for one non-`BytesIO`-related test in `_testcapimodule.c`).

Linked issue: `BytesIO` `unshare_buffer` in threads on a free-threaded build (#132551)
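
For context, here is a rough sketch of the kind of stress test implied by the `test_free_threading.test_io.TestBytesIO` note above. This is not the actual CPython test, just the general shape, and `stress_bytesio` is a made-up name:

```python
import threading
from io import BytesIO

def stress_bytesio(nthreads=8, iterations=2_000):
    """Hammer one shared BytesIO from many threads with mixed operations
    and make sure nothing crashes or corrupts interpreter state."""
    b = BytesIO(b"seed" * 256)
    barrier = threading.Barrier(nthreads)

    def worker():
        barrier.wait()
        for _ in range(iterations):
            b.write(b"payload")
            b.seek(0)
            b.read(16)
            b.getvalue()

    threads = [threading.Thread(target=worker) for _ in range(nthreads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # Per the discussion above, such concurrent writes can crash a
    # free-threaded build without the locking added here; with the GIL,
    # or with this PR, the loop should complete cleanly.
    return len(b.getvalue())

if __name__ == "__main__":
    print(stress_bytesio(), "bytes in buffer at the end")
```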